Semidefinite embedding
Semidefinite embedding (SDE) or maximum variance unfolding (MVU) is an algorithm in computer science that uses semidefinite programming to perform non-linear dimensionality reduction of high-dimensional vectorial input data. MVU can be viewed as a non-linear generalization of principal component analysis.
Non-linear dimensionality reduction algorithms attempt to map high-dimensional data onto a low-dimensional Euclidean vector space. Maximum variance unfolding is a member of the manifold learning family, which also includes algorithms such as Isomap and locally linear embedding. In manifold learning, the input data is assumed to be sampled from a low-dimensional manifold that is embedded in a higher-dimensional vector space. The main intuition behind MVU is to exploit the local linearity of manifolds and create a mapping that preserves local neighborhoods at every point of the underlying manifold.
MVU creates a mapping from the high-dimensional input vectors to some low-dimensional Euclidean vector space in three steps (a code sketch of each step is given below):
1. A neighborhood graph is created. Each input is connected to its k nearest input vectors (according to the Euclidean distance metric), and all k nearest neighbors of a common point are connected to one another. If the data are sampled well enough, the resulting graph is a discrete approximation of the underlying manifold.
2. The neighborhood graph is "unfolded" with the help of semidefinite programming. Instead of learning the output vectors directly, the semidefinite program seeks an inner-product matrix that maximizes the pairwise distances between any two inputs that are not connected in the neighborhood graph, while preserving the distances between neighbors.
3. The low-dimensional embedding is finally obtained by applying multidimensional scaling to the learned inner-product matrix.
The combination of a semidefinite program with a subsequent linear dimensionality-reduction step to recover a low-dimensional embedding into a Euclidean space was first proposed by Linial, London, and Rabinovich.
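The first step translates directly into code. The following is a minimal sketch, not part of the original algorithm description: it assumes scikit-learn's NearestNeighbors for the neighbor search, and the helper name neighborhood_pairs and the default k=4 are ours.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_pairs(X, k=4):
    """Step 1: build the k-nearest-neighbor graph of the (n, d) input
    array X and return the pairs whose distances the SDP must preserve:
    each point with its k neighbors, and the neighbors of a common
    point with one another."""
    n = X.shape[0]
    # k + 1 because each point is returned as its own nearest neighbor.
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    pairs = set()
    for i in range(n):
        neigh = idx[i, 1:]                                            # drop the point itself
        pairs.update((min(i, j), max(i, j)) for j in neigh)           # point-to-neighbor edges
        pairs.update((a, b) for a in neigh for b in neigh if a < b)   # neighbor-to-neighbor edges
    return sorted(pairs)
```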
==Optimization Formulation==

Let X be the original input and Y be the embedding. If i,j are two neighbors, then the local isometry constraint that needs to be satisfied is:
:|X_i - X_j|^2 = |Y_i - Y_j|^2
Let G and K be the Gram matrices of X and Y (i.e. G_{ij} = X_i \cdot X_j and K_{ij} = Y_i \cdot Y_j). Since |X_i - X_j|^2 = X_i \cdot X_i + X_j \cdot X_j - 2 X_i \cdot X_j, we can express the above constraint for every pair of neighbors i,j in terms of G and K:
:G_{ii} + G_{jj} - G_{ij} - G_{ji} = K_{ii} + K_{jj} - K_{ij} - K_{ji}
In addition, we also want to constrain the embedding Y to be centered at the origin:
:\sum_i Y_i = 0 \Leftrightarrow \left(\sum_i Y_i\right) \cdot \left(\sum_j Y_j\right) = 0 \Leftrightarrow \sum_{i,j} Y_i \cdot Y_j = 0 \Leftrightarrow \sum_{i,j} K_{ij} = 0
As described above, aside from preserving the distances between neighboring points, the algorithm aims to maximize the pairwise distance between every pair of points. The objective function to be maximized is:
:T(Y) = \dfrac{1}{2N} \sum_{i,j} |Y_i - Y_j|^2
Intuitively, maximizing the function above is equivalent to pulling the points as far apart from each other as possible, thereby "unfolding" the manifold. The local isometry constraints prevent the objective function from diverging to infinity. Proof:
Let \tau = \max \{ \eta_{ij} |Y_i - Y_j|^2 \}, where \eta_{ij} = 1 if i and j are neighbors and \eta_{ij} = 0 otherwise.
Assuming the neighborhood graph is connected, any two of the N points are joined by a path of at most N edges, each of squared length at most \tau, so by the triangle inequality |Y_i - Y_j| \leq N \sqrt{\tau} and hence |Y_i - Y_j|^2 \leq N^2 \tau. We can then bound the objective function as follows:
:T(Y) = \dfrac{1}{2N} \sum_{i,j} |Y_i - Y_j|^2 \leq \dfrac{1}{2N} \sum_{i,j} N^2 \tau = \dfrac{N^3 \tau}{2}
The objective function can be rewritten purely in terms of the Gram matrix:
:\begin{align}
T(Y) &= \dfrac{1}{2N} \sum_{i,j} |Y_i - Y_j|^2 \\
&= \dfrac{1}{2N} \sum_{i,j} \left( Y_i^2 + Y_j^2 - Y_i \cdot Y_j - Y_j \cdot Y_i \right) \\
&= \dfrac{1}{2N} \left( \sum_{i,j} Y_i^2 + \sum_{i,j} Y_j^2 - \sum_{i,j} Y_i \cdot Y_j - \sum_{i,j} Y_j \cdot Y_i \right) \\
&= \dfrac{1}{2N} \left( \sum_{i,j} Y_i^2 + \sum_{i,j} Y_j^2 - 0 - 0 \right) \\
&= \dfrac{1}{N} \sum_i Y_i^2 = \dfrac{1}{N} \operatorname{Tr}(K)
\end{align}
where the cross terms vanish by the centering constraint \sum_{i,j} Y_i \cdot Y_j = 0.
Finally, the optimization can be formulated as:
Maximize \operatorname{Tr}(K)
subject to K \succeq 0, \sum_{i,j} K_{ij} = 0, and
:G_{ii} + G_{jj} - G_{ij} - G_{ji} = K_{ii} + K_{jj} - K_{ij} - K_{ji} for all i,j with \eta_{ij} = 1.
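This program maps directly onto a general-purpose SDP modeling tool. The sketch below (step 2) uses CVXPY as an assumed solver interface, together with the hypothetical neighborhood_pairs helper from the earlier sketch; since K is symmetric, each isometry constraint is written with -2 K_{ij}.

```python
import cvxpy as cp

def unfold_gram(X, pairs):
    """Step 2: learn the Gram matrix K of the unfolded embedding.
    X is the (n, d) input array; pairs are the neighborhood pairs."""
    n = X.shape[0]
    G = X @ X.T                                  # Gram matrix of the inputs
    K = cp.Variable((n, n), PSD=True)            # K must be positive semidefinite
    cons = [cp.sum(K) == 0]                      # centering: sum_ij K_ij = 0
    for i, j in pairs:                           # local isometry constraints
        cons.append(K[i, i] + K[j, j] - 2 * K[i, j]
                    == G[i, i] + G[j, j] - 2 * G[i, j])
    cp.Problem(cp.Maximize(cp.trace(K)), cons).solve()
    return K.value
```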
After the Gram matrix K is learned by semidefinite programming, the output Y can be obtained via an eigenvalue decomposition of K. In particular, the Gram matrix can be written as
:K_{ij} = \sum_{\alpha=1}^{N} \lambda_{\alpha} V_{\alpha i} V_{\alpha j}
where V_{\alpha i} is the i-th element of the eigenvector V_{\alpha} with eigenvalue \lambda_{\alpha}.
It follows that the \alpha-th element of the output Y_i is \sqrt{\lambda_{\alpha}} V_{\alpha i}.
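A minimal numpy sketch of this recovery step (step 3), assuming K holds the Gram matrix returned by the SDP:

```python
import numpy as np

def embed(K, dim=2):
    """Step 3: recover the embedding from the learned Gram matrix.
    Row i of the result is Y_i, whose alpha-th entry is
    sqrt(lambda_alpha) * V_{alpha i}."""
    w, V = np.linalg.eigh(K)          # K = V diag(w) V^T, w ascending
    w, V = w[::-1], V[:, ::-1]        # largest eigenvalues first
    w = np.maximum(w, 0.0)            # guard against tiny negative eigenvalues
    return V[:, :dim] * np.sqrt(w[:dim])

# End to end, with the hypothetical helpers from the sketches above:
#   Y = embed(unfold_gram(X, neighborhood_pairs(X, k=4)), dim=2)
```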
